Fast Biclustering by Dual Parameterization
We study two clustering problems: Starforest Editing, the problem of adding
and deleting edges to obtain a disjoint union of stars, and its generalization,
Bicluster Editing. We show that, in addition to being NP-hard, neither of the
problems can be solved in subexponential time unless the Exponential Time
Hypothesis fails.
Misra, Panolan, and Saurabh (MFCS 2013) argue that introducing a bound on the
number of connected components in the solution should not make the problem
easier: In particular, they argue that the subexponential time algorithm for
editing to a fixed number of clusters (p-Cluster Editing) by Fomin et al. (J.
Comput. Syst. Sci., 80(7) 2014) is an exception rather than the rule. Here, p
is a secondary parameter, bounding the number of components in the solution.
However, upon bounding the number p of stars or bicliques in the solution, we
obtain subexponential parameterized algorithms for both p-Starforest Editing
and p-Bicluster Editing. We obtain a similar result for the more general case
of t-Partite p-Cluster Editing. These running times are subexponential in the
parameter k for any fixed number of clusters, since p is then considered a
constant.
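As a concrete illustration of the target class (an illustrative helper, not taken from the paper): a graph is a disjoint union of stars exactly when every connected component is a tree with at most one vertex of degree two or more. A brute-force recognizer is easy to write:

```python
from collections import deque

def is_star_forest(n, edges):
    """Check whether the graph on vertices 0..n-1 is a disjoint union of stars.

    A connected component is a star iff it is a tree (edges = vertices - 1)
    with at most one vertex of degree >= 2 (the center).
    """
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    seen = [False] * n
    for s in range(n):
        if seen[s]:
            continue
        # BFS to collect the component containing s
        comp, queue = [], deque([s])
        seen[s] = True
        while queue:
            u = queue.popleft()
            comp.append(u)
            for w in adj[u]:
                if not seen[w]:
                    seen[w] = True
                    queue.append(w)
        comp_edges = sum(len(adj[u]) for u in comp) // 2
        centers = sum(1 for u in comp if len(adj[u]) >= 2)
        if comp_edges != len(comp) - 1 or centers > 1:  # not a tree, or two centers
            return False
    return True
```

For example, two disjoint stars pass the check, while a path on four vertices or a triangle fails it.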
Our results even out the number of multivariate subexponential time
algorithms and give reasons to believe that this area warrants further study.
Comment: Accepted for presentation at IPEC 201
Exploring Subexponential Parameterized Complexity of Completion Problems
Let F be a family of graphs. In the F-Completion problem, we are given a
graph G and an integer k as input, and asked whether at most k edges can be
added to G so that the resulting graph does not contain a graph from F as an
induced subgraph. It appeared recently that special cases of F-Completion,
namely the problem of completing into a chordal graph known as Minimum
Fill-in, corresponding to the case where F is the family of cycles of length
at least four, and the problem of completing into a split graph, i.e., the
case of F = {2K_2, C_4, C_5}, are solvable in parameterized subexponential
time 2^{O(√k log k)} · n^{O(1)}. The exploration of this phenomenon is the
main motivation for our research on F-Completion.
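For intuition, the definition can be turned directly into a tiny brute-force solver (exponential in both k and the graph size, purely illustrative and not the paper's method): try every set of at most k non-edges to add, then test whether any member of F survives as an induced subgraph.

```python
from itertools import combinations, permutations

def has_induced(g_edges, n, pattern_edges, p):
    """Does the n-vertex graph contain the p-vertex pattern as an induced subgraph?"""
    pat = set(frozenset(e) for e in pattern_edges)
    for verts in permutations(range(n), p):
        if all((frozenset((verts[i], verts[j])) in g_edges) ==
               (frozenset((i, j)) in pat)
               for i in range(p) for j in range(i + 1, p)):
            return True
    return False

def f_completion(n, edges, forbidden, k):
    """Can we add at most k edges so that no graph in `forbidden` is induced?

    `forbidden` is a list of (p, pattern_edges) pairs on vertices 0..p-1.
    """
    present = set(frozenset(e) for e in edges)
    non_edges = [frozenset((u, v)) for u in range(n) for v in range(u + 1, n)
                 if frozenset((u, v)) not in present]
    for size in range(k + 1):
        for extra in combinations(non_edges, size):
            g = present | set(extra)
            if not any(has_induced(g, n, pe, p) for p, pe in forbidden):
                return True
    return False

# Trivially perfect graphs are exactly the {C4, P4}-free graphs.
C4 = (4, [(0, 1), (1, 2), (2, 3), (3, 0)])
P4 = (4, [(0, 1), (1, 2), (2, 3)])
```

For instance, a path on four vertices is itself a P4, so it needs at least one added edge before it becomes trivially perfect.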
In this paper we prove that completions into several well studied classes of
graphs without long induced cycles also admit parameterized subexponential time
algorithms by showing that:
- The problem Trivially Perfect Completion is solvable in parameterized
subexponential time 2^{O(√k log k)} · n^{O(1)}; this is F-Completion for
F = {C_4, P_4}, a cycle and a path on four vertices.
- The problems known in the literature as Pseudosplit Completion, the case
where F = {2K_2, C_4}, and Threshold Completion, where F = {2K_2, P_4, C_4},
are also solvable in time 2^{O(√k log k)} · n^{O(1)}.
We complement our algorithms for F-Completion with the following
lower bounds:
- For F = {2K_2}, F = {C_4}, F = {P_4}, and F = {2K_2, P_4}, F-Completion
cannot be solved in time 2^{o(k)} · n^{O(1)} unless the Exponential Time
Hypothesis (ETH) fails.
Our upper and lower bounds provide a complete picture of the subexponential
parameterized complexity of F-Completion problems for F ⊆ {2K_2, C_4, P_4}.
Comment: 32 pages, 16 figures. A preliminary version of this paper appeared in
the proceedings of STACS'1
A survey of parameterized algorithms and the complexity of edge modification
The survey is a comprehensive overview of the developing area of parameterized algorithms for graph modification problems. It describes the state of the art in kernelization, subexponential algorithms, and parameterized complexity of graph modification. The main focus is on edge modification problems, where the task is to change some adjacencies in a graph to satisfy some required properties. To facilitate further research, we list many open problems in the area.
Computing complexity measures of degenerate graphs
We show that the VC-dimension of a graph can be computed efficiently when
parameterized by the degeneracy d of the input graph. The core idea of our
algorithm is a data structure to efficiently query the number of vertices
that see a specific subset of vertices inside of a (small) query set. The
data structure can be constructed efficiently; afterwards, queries can be
answered using fast Möbius inversion.
This data structure turns out to be useful for a range of tasks, especially
for finding bipartite patterns in degenerate graphs, and we outline an
efficient algorithm for counting the number of times specific patterns occur
in a graph. The dominant factor in the running time of this algorithm depends
on a parameter of the pattern that we call its left covering number.
Concrete applications of this algorithm include counting the number of
(non-induced) bicliques in linear time, the number of co-matchings in quadratic
time, as well as a constant-factor approximation of the ladder index in linear
time.
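The fast Möbius-inversion step can be sketched with the standard subset-sum (zeta) transform over a small query set Q, here taken over supersets: given, for each S ⊆ Q, the number of vertices whose neighbourhood inside Q is exactly S, it produces in O(|Q| · 2^|Q|) time the number of vertices that see at least each subset. This is a generic sketch of the technique, not the paper's exact data structure:

```python
def superset_zeta(exact, q):
    """Superset-sum transform over subsets of a q-element query set Q.

    exact[m] = number of vertices v with N(v) ∩ Q equal to the subset
    encoded by bitmask m. Returns cnt with cnt[m] = number of vertices
    seeing every vertex of the subset m, i.e. the sum over all supersets
    of m. Runs in O(q * 2^q) instead of the naive O(4^q).
    """
    cnt = list(exact)
    for i in range(q):
        bit = 1 << i
        for m in range(1 << q):
            if not m & bit:
                cnt[m] += cnt[m | bit]
    return cnt
```

The actual data structure additionally exploits degeneracy to keep the query sets and the number of distinct neighbourhood traces small.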
Finally, we supplement our theoretical results with several implementations
and run experiments on more than 200 real-world datasets -- the largest of
which has 8 million edges -- where we obtain interesting insights into the
VC-dimension of real-world networks.
Comment: Accepted for publication in the 18th International Symposium on
Parameterized and Exact Computation (IPEC 2023).
On the number of types in sparse graphs
We prove that for every class C of graphs which is nowhere dense, as defined
by Nesetril and Ossona de Mendez, and for every first order formula
φ(x̄, ȳ), whenever one draws a graph G from C and a subset of its nodes A,
the number of subsets of A^|x̄| which are of the form
{v̄ ∈ A^|x̄| : G ⊨ φ(v̄, ū)}
for some valuation ū of ȳ in G is bounded by O(|A|^(|ȳ|+ε)), for every
ε > 0. This provides optimal bounds on the VC-density of first-order
definable set systems in nowhere dense graph classes.
We also give two new proofs of upper bounds on quantities in nowhere dense
classes which are relevant for their logical treatment. Firstly, we provide a
new proof of the fact that nowhere dense classes are uniformly quasi-wide,
implying explicit, polynomial upper bounds on the functions relating the two
notions. Secondly, we give a new combinatorial proof of the result of Adler and
Adler stating that every nowhere dense class of graphs is stable. In contrast
to the previous proofs of the above results, our proofs are completely
finitistic and constructive, and yield explicit and computable upper bounds on
quantities related to uniform quasi-wideness (margins) and stability (ladder
indices).
Two-sets cut-uncut on planar graphs
We study the following Two-Sets Cut-Uncut problem on planar graphs. Therein,
one is given an undirected planar graph G and two sets of vertices S and
T. The question is: what is the minimum number of edges to remove from G,
such that we separate all of S from all of T, while maintaining that every
vertex in S, and respectively in T, stays in the same connected component?
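The definition above can be checked by a tiny exponential-time reference solver (illustrative only; the paper's algorithm is far more sophisticated): enumerate edge subsets by increasing size and verify the separation and connectivity conditions.

```python
from itertools import combinations

def _components(n, edges):
    """Connected components of the graph, as a list of vertex sets."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, comps = [False] * n, []
    for s in range(n):
        if seen[s]:
            continue
        stack, cur = [s], set()
        seen[s] = True
        while stack:
            u = stack.pop()
            cur.add(u)
            for w in adj[u]:
                if not seen[w]:
                    seen[w] = True
                    stack.append(w)
        comps.append(cur)
    return comps

def two_sets_cut_uncut(n, edges, S, T):
    """Minimum number of edges whose removal separates S from T while
    keeping all of S in one component and all of T in one component."""
    S, T = set(S), set(T)
    for size in range(len(edges) + 1):
        for removed in combinations(range(len(edges)), size):
            rem = set(removed)
            comps = _components(n, [e for i, e in enumerate(edges)
                                    if i not in rem])
            s_ok = any(S <= c for c in comps)          # S stays together
            t_ok = any(T <= c for c in comps)          # T stays together
            separated = not any(S & c and T & c for c in comps)
            if s_ok and t_ok and separated:
                return size
    return None
```

On a 4-cycle with S and T on opposite vertices, two edge deletions are necessary and sufficient; on a path, one deletion suffices.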
We show that this problem is fixed-parameter tractable when parameterized by
|S| + |T|, via a one-sided error randomized algorithm. Our algorithm implies
a polynomial-time algorithm for the network diversion problem on planar
graphs, which resolves an open question from the literature. More generally,
we show that Two-Sets Cut-Uncut remains fixed-parameter tractable even when
parameterized by the number of faces in the plane graph covering the
terminals S ∪ T.
Comment: 22 pages, 5 figures
Kernelization and Sparseness: the case of Dominating Set
We prove that for every positive integer r and for every graph class C
of bounded expansion, the r-Dominating Set problem admits a
linear kernel on graphs from C. Moreover, when C is only
assumed to be nowhere dense, we give an almost linear kernel on C for the
classic Dominating Set problem, i.e., for the case r = 1. These
results generalize a line of previous research on finding linear kernels for
Dominating Set and r-Dominating Set. However, the approach taken in this
work, which is based on the theory of sparse graphs, is radically different and
conceptually much simpler than the previous approaches.
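To fix the definition used above (an illustrative checker, not part of the paper's kernelization): a set D is an r-dominating set if every vertex of the graph is within distance r of some vertex in D, which a multi-source BFS verifies directly.

```python
from collections import deque

def is_r_dominating(n, edges, D, r):
    """Check whether D is a distance-r dominating set via multi-source BFS."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = [-1] * n
    queue = deque()
    for d in D:               # all sources start at distance 0
        dist[d] = 0
        queue.append(d)
    while queue:
        u = queue.popleft()
        if dist[u] == r:      # no need to explore beyond radius r
            continue
        for w in adj[u]:
            if dist[w] == -1:
                dist[w] = dist[u] + 1
                queue.append(w)
    return all(x != -1 for x in dist)
```

On a path on five vertices, the middle vertex 2-dominates everything but fails to 1-dominate the endpoints.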
We complement our findings by showing that for the closely related Connected
Dominating Set problem, the existence of such kernelization algorithms is
unlikely, even though the problem is known to admit a linear kernel on
H-topological-minor-free graphs. Also, we prove that for any somewhere dense
class C, there is some r for which r-Dominating Set is
W[2]-hard on C. Thus, our results fall short of proving a sharp
dichotomy for the parameterized complexity of r-Dominating Set on
subgraph-monotone graph classes: we conjecture that the border of tractability
lies exactly between nowhere dense and somewhere dense graph classes.
Comment: v2: new author, added results for r-Dominating Sets in bounded
expansion graphs
Parameterized Graph Modification Algorithms
Graph modification problems form an important class of algorithmic problems in computer science. In this thesis, we study edge modification problems towards classes related to chordal graphs, with the main focus on trivially perfect graphs and threshold graphs. We provide several new results in classical complexity, kernelization complexity, and subexponential parameterized complexity. In all cases we give positive and negative results: polynomial time algorithms as well as NP-hardness results, polynomial kernels as well as polynomial-kernel impossibility results, and subexponential time algorithms together with proofs that many problems do not admit such algorithms unless the exponential time hypothesis fails.

Our main focus is on the subexponential time complexity of edge modification problems. For that to make sense, we first need to figure out whether or not we actually need super-polynomial time. We show that editing towards trivially perfect graphs, threshold graphs, and chain graphs is NP-complete in each case, resolving 15-year-old open questions. When a problem is shown to be NP-complete, we study exactly how much exponential time is needed for an algorithm to solve it. We provide several subexponential time algorithms, e.g., for editing towards chain graphs and threshold graphs, as well as for completing towards trivially perfect graphs. We complement our results by showing that small alterations in the target graph classes yield much harder problems: editing towards trivially perfect graphs and towards cographs is not possible in subexponential time unless the exponential time hypothesis fails.

A first step in our subexponential time algorithms, and an otherwise natural first step in dealing with NP-hard problems, is offered by the toolbox of polynomial kernelization. In polynomial kernelization, we are asked to design polynomial time compression algorithms that shrink the input instances to output instances whose size is bounded polynomially in the parameter. We provide polynomial kernels for all edge modification problems towards trivially perfect graphs, threshold graphs, and chain graphs. In addition, we show that on bounded degree input graphs, we obtain polynomial kernels for any editing or deletion problem towards graph classes characterizable by a finite set of forbidden induced subgraphs. Finally, we show that we should not expect the same result for completion problems, by proving that such a compression algorithm would imply the collapse of the polynomial hierarchy.